
    Gay Subversion: Young Men Seeking Safety in Heterotopic Spaces

    In the early years of the twenty-first century, gay-themed texts with teenage protagonists are moving from being an isolated subgenre to becoming a more integrated part of this field of writing and viewing. Even though there is now more visible support for gay adolescents than previously, coming to terms with an emerging gay sexuality, and deciding whether or not to declare that sexuality publicly with the inherent risk of marginalisation and loss of family and friends, remains central to current gay-themed texts. For boys, the path to manhood can be a journey fraught with challenges, but even more so for gay boys, who must contend with the forces of ‘hegemonic’ (heterosexual) masculinities and the pressure to ‘do boy’ according to socially sanctioned rules and norms. This article examines the ways the gay protagonists in three Young Adult novels—Leave Myself Behind by Bart Yates (2003), A Time Before Me by Michael Peronne (2005) and Sushi Central by Alasdair Duncan (2003)—and in two films—Prayers for Bobby (2009) and Geography Club (2013)—seek safety in heterotopic spaces. It is argued that heterotopias can provide safe spaces for the expression of same-sex desire among males, subverting the constraints of hegemonic masculinity and the larger spatial sites in which they operate. The trope of ‘safe space’ can be used as a mechanism to segregate individuals who challenge the heterosexual/homosexual binary under the guise of providing for their safety. However, while safe spaces are mechanisms for the construction of gay identity, they can also generate homophobic retribution. Struggles for visible identities outside of safe spaces can incite violence when gay visibility threatens the normalised landscape.

    A novel real-time computational framework for detecting catheters and rigid guidewires in cardiac catheterization procedures

    Purpose: Catheters and guidewires are used extensively in cardiac catheterization procedures such as heart arrhythmia treatment (ablation), angioplasty and congenital heart disease treatment. Detecting their positions in fluoroscopic X-ray images is important for several clinical applications, for example, motion compensation, co-registration between 2D and 3D imaging modalities and 3D object reconstruction. Methods: In the generalized framework, a multiscale vessel enhancement filter is first used to enhance the visibility of wire-like structures in the X-ray images. After applying an adaptive binarization method, the centerlines of wire-like objects are extracted. Finally, the catheters and guidewires are detected as a smooth path reconstructed from the centerlines of the target wire-like objects. To classify electrode catheters, which are mainly used in electrophysiology procedures, additional steps are proposed. First, a blob detection method, embedded in the vessel enhancement filter at no additional computational cost, localizes electrode positions on the catheters. The type of electrode catheter can then be recognized from the number of electrodes and the shape created by the series of electrodes. Furthermore, for detecting guiding catheters or guidewires, a localized machine learning algorithm is added to the framework to distinguish target wire objects from other wire-like artifacts. The proposed framework was tested on a total of 10,624 images from 102 image sequences acquired from 63 clinical cases. Results: Detection errors for the coronary sinus (CS) catheter, lasso catheter ring and lasso catheter body are 0.56 ± 0.28 mm, 0.64 ± 0.36 mm and 0.66 ± 0.32 mm, with success rates of 91.4%, 86.3% and 84.8%, respectively. The detection error for guidewires and guiding catheters is 0.62 ± 0.48 mm, with a success rate of 83.5%.
    Conclusion: The proposed computational framework does not require any user interaction or prior models, and it can detect multiple catheters or guidewires simultaneously and in real time. The accuracy of the proposed framework is sub-millimeter, and the methods are robust to low-dose X-ray fluoroscopic images, which are mainly used during procedures to keep the radiation dose low.
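    The enhancement → binarization → centerline pipeline described above can be sketched with off-the-shelf components. The snippet below is a minimal illustration, not the authors' implementation: the Frangi filter stands in for the multiscale vessel enhancement, Otsu thresholding stands in for the adaptive binarization, and morphological skeletonization yields one-pixel centerlines.

```python
import numpy as np
from skimage.filters import frangi, threshold_otsu
from skimage.morphology import skeletonize

def extract_wire_centerlines(xray, sigmas=range(1, 4)):
    """Sketch of the generalized wire-detection steps from the abstract.

    1. Multiscale ridge enhancement highlights wire-like structures
       (black_ridges=True because catheters appear dark in fluoroscopy).
    2. Binarize the enhanced response (Otsu here, as a simple stand-in
       for the paper's adaptive binarization).
    3. Skeletonize the binary mask to obtain one-pixel centerlines.
    """
    enhanced = frangi(xray, sigmas=sigmas, black_ridges=True)
    binary = enhanced > threshold_otsu(enhanced)
    return skeletonize(binary)

# Synthetic example: a dark diagonal "wire" on a bright background.
img = np.full((64, 64), 0.9)
for i in range(10, 54):
    img[i, i] = 0.1
centerlines = extract_wire_centerlines(img)
```

In the full framework, the resulting centerline fragments would still need to be linked into a smooth path and, for electrode catheters, combined with blob detection of the electrodes; those steps are omitted here.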

    Automated colonoscopy withdrawal phase duration estimation using cecum detection and surgical tasks classification

    Colorectal cancer is the third most common type of cancer, with almost two million new cases worldwide. These cancers develop from neoplastic polyps, most commonly adenomas, which can be removed during colonoscopy to prevent colorectal cancer from occurring. Unfortunately, up to a quarter of polyps are missed during colonoscopies. Studies have shown that polyp detection during a procedure correlates with the time spent searching for polyps, called the withdrawal time. The different phases of the procedure (cleaning, therapeutic, and exploration phases) make it difficult to precisely measure the withdrawal time, which should only include the exploration phase. Separating this from the other phases requires manual time measurement during the procedure, which is rarely performed. In this study, we propose a method to automatically detect the cecum, which marks the start of the withdrawal phase, and to classify the different phases of the colonoscopy, allowing precise estimation of the final withdrawal time. This is achieved using a ResNet for both detection and classification, trained on two public datasets and a private dataset composed of 96 full procedures. Out of 19 testing procedures, 18 have their withdrawal time correctly estimated, with a mean error of 5.52 seconds per minute per procedure.

    Polyp detection on video colonoscopy using a hybrid 2D/3D CNN

    Colonoscopy is the gold standard for early diagnosis and pre-emptive treatment of colorectal cancer by detecting and removing colonic polyps. Deep learning approaches to polyp detection have shown potential for enhancing polyp detection rates. However, the majority of these systems are developed and evaluated on static images from colonoscopies, whilst in clinical practice the treatment is performed on a real-time video feed. Non-curated video data remains a challenge, as it contains low-quality frames when compared to the still, selected images often obtained from diagnostic records. Nevertheless, it also embeds temporal information that can be exploited to increase prediction stability. A hybrid 2D/3D convolutional neural network architecture for polyp segmentation is presented in this paper. The network is used to improve polyp detection by encompassing spatial and temporal correlation of the predictions while preserving real-time detections. Extensive experiments show that the hybrid method outperforms a 2D baseline. The proposed architecture is validated on videos from 46 patients and on the publicly available SUN polyp database. Higher performance and increased generalisability indicate that real-world clinical implementations of automated polyp detection can benefit from the hybrid algorithm and the inclusion of temporal information.
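    A hybrid 2D/3D design of the kind described above can be sketched as a 2D encoder applied per frame followed by 3D convolutions that mix features across neighbouring frames. The toy network below illustrates the idea only; the layer counts, channel widths, and output convention are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class Hybrid2D3DSegNet(nn.Module):
    """Minimal hybrid 2D/3D segmentation sketch: per-frame 2D features
    are stacked along time and refined with 3D convolutions, so the
    mask predicted for the centre frame uses temporal context."""
    def __init__(self, channels=8):
        super().__init__()
        self.encoder2d = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.temporal3d = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, 1),     # per-pixel polyp logit
        )

    def forward(self, clip):               # clip: (B, T, 3, H, W)
        b, t, c, h, w = clip.shape
        feats = self.encoder2d(clip.reshape(b * t, c, h, w))
        feats = feats.reshape(b, t, -1, h, w).permute(0, 2, 1, 3, 4)
        logits = self.temporal3d(feats)    # (B, 1, T, H, W)
        return logits[:, 0, t // 2]        # mask logits, centre frame
```

Because the 3D stage operates on a short sliding window of frames, the network can still produce a prediction for every incoming frame, which is what makes this family of architectures compatible with real-time use.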

    Spatio-temporal classification for polyp diagnosis

    Colonoscopy remains the gold standard investigation for colorectal cancer screening as it offers the opportunity to both detect and resect pre-cancerous polyps. Computer-aided polyp characterisation can determine which polyps need polypectomy, and recent deep learning-based approaches have shown promising results as clinical decision support tools. Yet polyp appearance during a procedure can vary, making automatic predictions unstable. In this paper, we investigate the use of spatio-temporal information to improve the performance of lesion classification as adenoma or non-adenoma. Two methods are implemented, showing an increase in performance and robustness during extensive experiments on both internal and openly available benchmark datasets.

    3D/2D Registration with Superabundant Vessel Reconstruction for Cardiac Resynchronization Therapy


    Identifying key mechanisms leading to visual recognition errors for missed colorectal polyps using eye-tracking technology

    BACKGROUND AND AIMS: Lack of visual recognition of colorectal polyps may lead to interval cancers. The mechanisms contributing to perceptual variation, particularly for subtle and advanced colorectal neoplasia, have scarcely been investigated. We aimed to evaluate visual recognition errors and provide novel mechanistic insights. METHODS: Eleven participants (7 trainees, 4 medical students) evaluated images from the UCL polyp perception dataset, containing 25 polyps, using eye-tracking equipment. Gaze errors were defined as those where the lesion was not observed according to eye-tracking technology. Cognitive errors occurred when lesions were observed but not recognised as polyps by participants. A video study was also performed including 39 subtle polyps, in which polyp recognition performance was compared with a convolutional neural network (CNN). RESULTS: Cognitive errors occurred more frequently than gaze errors overall (65.6%), with a significantly higher proportion in trainees (P=0.0264). In the video validation, the CNN detected significantly more polyps than trainees and medical students, with per-polyp sensitivities of 79.5%, 30.0% and 15.4%, respectively. CONCLUSIONS: Cognitive errors were the most common reason for visual recognition errors. The impact of interventions such as artificial intelligence, particularly on different types of perceptual errors, needs further investigation, including potential effects on learning curves. To facilitate future research, a publicly accessible visual perception colonoscopy polyp database was created.
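    The gaze-versus-cognitive error taxonomy above reduces to a simple decision rule once fixations and the lesion location are known. The function below is an illustrative sketch under assumed data layouts (fixations as (x, y) points, the polyp as an axis-aligned bounding box); it is not the study's analysis code.

```python
def classify_recognition_error(fixations, polyp_bbox, reported):
    """Classify a trial using the abstract's error taxonomy:
    - gaze error: the polyp was missed and never fixated;
    - cognitive error: the polyp was fixated but still not reported.

    fixations: iterable of (x, y) gaze points from the eye tracker.
    polyp_bbox: (x0, y0, x1, y1) bounding box of the lesion.
    reported: whether the participant called the lesion a polyp.
    """
    x0, y0, x1, y1 = polyp_bbox
    looked_at = any(x0 <= x <= x1 and y0 <= y <= y1
                    for x, y in fixations)
    if reported:
        return "recognised"
    return "cognitive error" if looked_at else "gaze error"
```

Separating the two error types in this way is what lets the study argue that decision support (which addresses recognition, not search) should target cognitive errors specifically.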